
    Tactile-based Manipulation of Deformable Objects with Dynamic Center of Mass

    Tactile sensing feedback provides feasible solutions to robotic dexterous manipulation tasks. In this paper, we present a novel tactile-based framework for detecting and correcting slips and for regulating grasping forces while manipulating deformable objects with a dynamic center of mass. The framework consists of a tangential-force-based slip detection method and a deformation prevention approach relying on weight estimation. Moreover, we propose a new strategy for manipulating heavy deformable objects. Objects with different stiffnesses, surface textures, and centers of mass are tested in experiments. The results show that the proposed approaches are capable of handling objects with uncertain characteristics and are also robust to external disturbances.
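
    As a concrete aside, the tangential-force slip criterion described above is most commonly realized as a friction-cone check. The sketch below is a minimal illustration of that idea under an assumed friction coefficient, slip margin, and two-finger grasp geometry; it is not the authors' implementation.

```python
import numpy as np

def detect_slip(f_tangential, f_normal, mu=0.5, margin=0.9):
    """Friction-cone check: flag incipient slip when the tangential load
    approaches mu * normal force. mu and margin are assumed values."""
    if f_normal <= 1e-6:
        return True  # no normal force means no stable contact
    return f_tangential > margin * mu * f_normal

def regulate_grip(f_normal, f_tangential, est_weight, mu=0.5, safety=1.3):
    """Pick a grip force that holds the estimated weight with a safety factor
    (two-finger grasp assumed) and step it up further if slip is detected."""
    f_required = safety * est_weight / (2.0 * mu)
    if detect_slip(f_tangential, f_normal, mu):
        f_required = max(f_required, 1.2 * f_normal)
    return f_required

# Example: a 0.3 kg object with the tangential load close to the friction limit
weight = 0.3 * 9.81
print(regulate_grip(f_normal=3.0, f_tangential=1.4, est_weight=weight))
```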

    Push to know! -- Visuo-Tactile based Active Object Parameter Inference with Dual Differentiable Filtering

    For robotic systems to interact with objects in dynamic environments, it is essential to perceive the physical properties of the objects, such as shape, friction coefficient, mass, center of mass, and inertia. This not only eases the selection of manipulation actions but also ensures the task is performed as desired. However, estimating the physical properties of novel objects in particular is a challenging problem, whether using vision or tactile sensing. In this work, we propose a novel framework to estimate key object parameters via non-prehensile manipulation using vision and tactile sensing. Our active dual differentiable filtering (ADDF) approach, which is part of this framework, learns the object-robot interaction during non-prehensile object pushes to infer the object's parameters. The proposed method enables the robotic system to employ vision and tactile information to interactively explore a novel object via non-prehensile pushes. The N-step active formulation within the differentiable filtering facilitates efficient learning of the object-robot interaction model and, during inference, selects the next best exploratory push actions (where to push and how to push). We extensively evaluated our framework in simulated and real-robot scenarios, yielding superior performance to the state-of-the-art baseline. Comment: 8 pages. Accepted at IROS 202
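
    For illustration only, the snippet below sketches the action-selection step of such an active filtering scheme: candidate pushes are scored by the expected entropy reduction of a Gaussian parameter belief under a linear-Gaussian observation model. The Jacobian, noise terms, and candidate actions are made-up placeholders, not the ADDF model.

```python
import numpy as np

def expected_info_gain(P, H, R):
    """Expected entropy reduction of a Gaussian belief for one push action,
    assuming a linear-Gaussian observation model z = H @ theta + noise."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    P_post = (np.eye(P.shape[0]) - K @ H) @ P
    return 0.5 * (np.log(np.linalg.det(P)) - np.log(np.linalg.det(P_post)))

def select_push(P, candidate_actions, obs_jacobian, R):
    """Pick the push action with the highest expected information gain."""
    gains = [expected_info_gain(P, obs_jacobian(a), R) for a in candidate_actions]
    return candidate_actions[int(np.argmax(gains))], max(gains)

# Toy example: belief over (mass, friction, com_x); pushes differ in contact point
P0 = np.diag([0.5, 0.2, 0.1])
R = 0.01 * np.eye(2)

def obs_jacobian(contact_x):
    # Hypothetical sensitivity of the observed object motion to the parameters
    return np.array([[1.0, 0.3, contact_x],
                     [0.0, 1.0, 2.0 * contact_x]])

best, gain = select_push(P0, [-0.1, 0.0, 0.1], obs_jacobian, R)
print(best, gain)
```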

    GMCR: Graph-based Maximum Consensus Estimation for Point Cloud Registration

    Point cloud registration is a fundamental and challenging problem for autonomous robots interacting in unstructured environments, for applications such as object pose estimation, simultaneous localization and mapping, robot-sensor calibration, and so on. In global correspondence-based point cloud registration, data association is a highly brittle task and commonly produces a high proportion of outliers. Failure to reject outliers can lead to errors propagating to downstream perception tasks. Maximum Consensus (MC) is a widely used technique for robust estimation, which is however known to be NP-hard. Exact methods struggle to scale to realistic problem instances, whereas high outlier rates are challenging for approximate methods. To this end, we propose Graph-based Maximum Consensus Registration (GMCR), which is highly robust to outliers and scales to realistic problem instances. We propose novel consensus functions to map the decoupled MC objective to the graph domain, wherein we find a tight approximation to the maximum consensus set as the maximum clique. The final pose estimate is given in closed form. We extensively evaluated GMCR on a synthetic registration benchmark, a robotic object localization task, and additionally on a scan matching benchmark. Our proposed method shows high accuracy and time efficiency compared to other state-of-the-art MC methods and compares favorably to other robust registration methods. Comment: Accepted at ICRA 202
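
    The pipeline described above (pairwise-consistent correspondences as graph edges, the consensus set as a maximum clique, and a closed-form pose) can be approximated with off-the-shelf tools. The sketch below uses the networkx package for the clique search and the Kabsch/SVD solution for the pose; the threshold and toy data are assumptions, and this is not the GMCR implementation.

```python
import numpy as np
import networkx as nx

def consensus_graph(src, dst, eps=0.05):
    """Edge (i, j) iff correspondences i and j are pairwise length-consistent:
    rigid motions preserve distances, so |d_src(i,j) - d_dst(i,j)| should be small."""
    G = nx.Graph()
    G.add_nodes_from(range(len(src)))
    for i in range(len(src)):
        for j in range(i + 1, len(src)):
            if abs(np.linalg.norm(src[i] - src[j]) - np.linalg.norm(dst[i] - dst[j])) < eps:
                G.add_edge(i, j)
    return G

def kabsch(src, dst):
    """Closed-form rotation and translation aligning src to dst (SVD/Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

np.random.seed(0)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.3])
src = np.random.rand(30, 3)
dst = src @ R_true.T + t_true
dst[:10] += np.random.rand(10, 3)     # corrupt 10 correspondences with outliers

inliers = max(nx.find_cliques(consensus_graph(src, dst)), key=len)
R_est, t_est = kabsch(src[inliers], dst[inliers])
print(len(inliers), t_est)
```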

    Intelligent in-vehicle interaction technologies

    With rapid advances in the field of autonomous vehicles (AVs), the ways in which human–vehicle interaction (HVI) will take place inside the vehicle have attracted major interest and, as a result, intelligent interiors are being explored to improve the user experience, acceptance, and trust. This is also fueled by parallel research in areas such as perception and control of robots, safe human–robot interaction, wearable systems, and the underpinning flexible/printed electronics technologies, some of which are being routed to AVs. A growing number of networked sensors are being integrated into vehicles for multimodal interaction, to draw correct inferences about the user's communicative cues and to vary the interaction dynamics depending on the user's cognitive state and the contextual driving scenario. In response to this growing trend, this timely article presents a comprehensive review of the technologies that are being used or developed to perceive users' intentions for natural and intuitive in-vehicle interaction. The challenges that need to be overcome to attain truly interactive AVs, and their potential solutions, are discussed along with new avenues for future research.

    An Empirical Evaluation of Various Information Gain Criteria for Active Tactile Action Selection for Pose Estimation

    Accurate object pose estimation using multi-modal perception, such as visual and tactile sensing, has been widely used for autonomous robotic manipulators in the literature. Due to the variation in density between visual and tactile data, we previously proposed a probabilistic Bayesian-filter-based approach termed the translation-invariant quaternion filter (TIQF) for pose estimation. As tactile data collection is time consuming, active tactile data collection is preferred, reasoning over multiple potential actions to maximize the expected information gain. In this paper, we empirically evaluate various information gain criteria for action selection in the context of object pose estimation. We demonstrate the adaptability and effectiveness of our TIQF pose estimation approach with each of these criteria. We find similar performance in terms of pose accuracy with sparse measurements across all the selected criteria.
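
    To make the comparison concrete, the snippet below sketches two of the kinds of criteria such a study might contrast, expected entropy reduction versus expected variance reduction, for a discrete pose belief and a binary contact measurement. The likelihood values are invented for illustration and do not come from the paper.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_entropy_reduction(belief, likelihoods):
    """Expected information gain of one touch action for a discrete pose belief.
    likelihoods[k, h] = p(outcome k | pose hypothesis h), an assumed sensor model."""
    gain = 0.0
    for lk in likelihoods:                 # each possible measurement outcome
        p_z = np.dot(lk, belief)           # marginal probability of this outcome
        if p_z > 0:
            post = lk * belief / p_z       # Bayes update
            gain += p_z * (entropy(belief) - entropy(post))
    return gain

def variance_reduction(belief, likelihoods, values):
    """Alternative criterion: expected reduction in variance of a scalar pose value."""
    def var(p):
        return np.dot(p, values ** 2) - np.dot(p, values) ** 2
    red = 0.0
    for lk in likelihoods:
        p_z = np.dot(lk, belief)
        if p_z > 0:
            red += p_z * (var(belief) - var(lk * belief / p_z))
    return red

# Toy example: 4 pose hypotheses, one candidate touch with a binary contact outcome
belief = np.array([0.4, 0.3, 0.2, 0.1])
likelihoods = np.array([[0.9, 0.7, 0.2, 0.1],     # p(contact | hypothesis)
                        [0.1, 0.3, 0.8, 0.9]])    # p(no contact | hypothesis)
values = np.array([0.00, 0.01, 0.02, 0.03])       # e.g. x-offset of each hypothesis
print(expected_entropy_reduction(belief, likelihoods),
      variance_reduction(belief, likelihoods, values))
```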

    Touch if it's Transparent! ACTOR: Active Tactile-Based Category-Level Transparent Object Reconstruction

    Accurate shape reconstruction of transparent objects is a challenging task due to their non-Lambertian surfaces, yet it is necessary for robots to perceive their pose accurately and manipulate them safely. While vision-based sensing can produce erroneous measurements for transparent objects, the tactile modality is not sensitive to object transparency and can be used to reconstruct the object's shape. We propose ACTOR, a novel framework for ACtive tactile-based category-level Transparent Object Reconstruction. Because collecting real-world tactile data is prohibitively expensive, ACTOR leverages large datasets of synthetic objects with our proposed self-supervised learning approach for object shape reconstruction. During inference, ACTOR can be used with tactile data from category-level unknown transparent objects for reconstruction. Furthermore, since probing every part of the object surface can be sample-inefficient, we propose an active tactile object exploration strategy. We also demonstrate a tactile-based category-level object pose estimation task using ACTOR. We perform an extensive evaluation of the proposed methodology in real-world robotic experiments, with comprehensive comparison studies against state-of-the-art approaches. Our method outperforms these approaches in terms of tactile-based object reconstruction and object pose estimation.
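
    As a rough illustration of uncertainty-driven tactile exploration (not ACTOR's actual strategy), the sketch below picks the next surface point to probe by trading predicted shape uncertainty against travel cost; the candidate points and uncertainty values are assumed.

```python
import numpy as np

def next_touch(candidates, uncertainty, visited, travel_cost_weight=0.1, last=None):
    """Pick the next surface point to probe: highest predicted shape uncertainty,
    discounted by travel distance from the previous touch, skipping visited points."""
    scores = uncertainty.astype(float).copy()
    scores[list(visited)] = -np.inf
    if last is not None:
        dists = np.linalg.norm(candidates - candidates[last], axis=1)
        scores = scores - travel_cost_weight * dists
    return int(np.argmax(scores))

# Toy example: 5 candidate points on an assumed object surface
candidates = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0],
                       [0.1, 0.1, 0.0], [0.05, 0.05, 0.1]])
uncertainty = np.array([0.2, 0.8, 0.5, 0.9, 0.1])   # e.g. per-point model variance
print(next_touch(candidates, uncertainty, visited={0}, last=0))
```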

    Neuro-inspired electronic skin for robots

    Touch is a complex sensing modality owing to the large number of receptors (mechanical, thermal, pain) nonuniformly embedded in the soft skin all over the body. These receptors gather and encode large amounts of tactile data, allowing us to feel and perceive the real world. This efficient somatosensation far outperforms the touch-sensing capability of most state-of-the-art robots today and suggests the need for neural-like hardware for electronic skin (e-skin). This could be attained either through innovative schemes for developing distributed electronics or by repurposing the neuromorphic circuits developed for other sensory modalities such as vision and audition. This Review highlights the hardware implementations of various computational building blocks for e-skin and the ways they can be integrated to potentially realize human skin-like or peripheral nervous system-like functionalities. Neural-like sensing and data processing are discussed along with various algorithms and hardware architectures. The integration of ultrathin neuromorphic chips for local computation and of printed electronics on soft substrates for developing e-skin over large areas is expected to advance robotic interaction as well as open new avenues for research in medical instrumentation, wearables, electronics, and neuroprosthetics.
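
    One building block that typically appears in such neuromorphic pipelines is event-driven encoding of tactile signals. The sketch below shows a generic send-on-delta (level-crossing) encoder applied to a toy pressure trace; the threshold and the signal are assumptions and the code is only illustrative, not tied to any specific hardware in the Review.

```python
import numpy as np

def level_crossing_spikes(signal, delta=0.05):
    """Event-driven (send-on-delta) encoding: emit a +1/-1 spike whenever the
    signal moves by more than `delta` from the last spiking level, instead of
    transmitting every sample at a fixed rate."""
    spikes, ref = [], signal[0]
    for t, x in enumerate(signal):
        while x - ref >= delta:
            spikes.append((t, +1)); ref += delta
        while ref - x >= delta:
            spikes.append((t, -1)); ref -= delta
    return spikes

# Toy tactile pressure trace: slow ramp plus a brief contact transient
t = np.linspace(0, 1, 200)
pressure = 0.2 * t + 0.3 * np.exp(-((t - 0.5) ** 2) / 0.001)
print(len(level_crossing_spikes(pressure)), "events instead of", len(pressure), "samples")
```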

    Upward Altitudinal Shifts in Habitat Suitability of Mountain Vipers since the Last Glacial Maximum

    We determined the effects of past and future climate changes on the distribution of the Montivipera raddei species complex (MRC), which contains rare and endangered viper species limited to Iran, Turkey, and Armenia. We also investigated the current distribution of MRC to locate unidentified isolated populations and to evaluate the effectiveness of the current network of protected areas for their conservation. The present distribution of MRC was modeled based on ecological variables, and model performance was evaluated by field visits. Some individuals at the newly identified populations showed uncommon morphological characteristics. The distribution map of MRC derived through modeling was then compared with the distribution of protected areas in the region. We estimated the effectiveness of the current protected area network to be 10%, which would be sufficient for conserving this group of species, provided adequate management policies and practices are employed. We further modeled the distribution of MRC in the past (21,000 years ago) and under two future scenarios (to 2070). These models indicated that climatic changes have probably been responsible for an upward shift in the suitable habitats of MRC since the Last Glacial Maximum, leading to the isolation of allopatric populations. Their distribution will probably become much more restricted in the future as a result of the current rate of global warming. We conclude that climate change most likely played a major role in determining the distribution pattern of MRC, restricting allopatric populations to mountaintops through habitat alterations. This long-term isolation has facilitated unique local adaptations among MRC populations, which require further investigation. The suitable habitat patches identified through modeling constitute optimized solutions for inclusion in the network of protected areas in the region.

    A Review of Transfer Learning Algorithms

    This work reviews twenty state-of-the-art papers on the topic of visual transfer learning. Special focus lies on algorithms and applications of transfer learning for visual detection and classification. In Chapter 1, an overview of transfer learning, as well as its applications and general methodology, is introduced. Chapter 2 contains brief summaries of and comments on each paper.

    Novel Tactile Descriptors and a Tactile Transfer Learning Technique for Active In-Hand Object Recognition via Texture Properties

    This paper proposes robust tactile descriptors and, for the first time, a novel online tactile transfer learning strategy for discriminating objects through surface texture properties via a robotic hand and an artificial robotic skin. Using the proposed tactile descriptors, the robotic hand can extract robust tactile information from the vibro-tactile signals generated during in-hand object exploration. The tactile transfer learning algorithm enables the robotic system to autonomously select and then exploit previously learned texture models when classifying new objects from only a few training samples, or even one. The experimental outcomes demonstrate that, employing the proposed methods and 10 prior texture models, the robotic hand could identify 12 objects through their surface texture properties with 97% and 100% recognition rates using only one and ten training samples, respectively.
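
    Purely as an illustration of the kind of descriptor-plus-few-shot pipeline described above (not the authors' descriptors or transfer learning algorithm), the sketch below computes band-energy features from a vibro-tactile signal and classifies a probe by nearest prototype; the sampling rate, band edges, and toy signals are assumed.

```python
import numpy as np

def vibrotactile_descriptor(signal, fs=1000.0, bands=((5, 50), (50, 150), (150, 400))):
    """Simple spectral descriptor for a vibro-tactile signal: log energy in a few
    frequency bands plus overall RMS. Band edges and sampling rate are assumed."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    feats = [np.log(spec[(freqs >= lo) & (freqs < hi)].sum() + 1e-12) for lo, hi in bands]
    feats.append(np.sqrt(np.mean(signal ** 2)))
    return np.array(feats)

def classify_nearest(query, prototypes):
    """One-shot texture recognition: the nearest prototype descriptor wins."""
    labels = list(prototypes)
    dists = [np.linalg.norm(query - prototypes[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Toy example: two 'textures' as noisy sinusoids with different dominant frequencies
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 1000.0)
rough = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
fine = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(t.size)
protos = {"rough": vibrotactile_descriptor(rough), "fine": vibrotactile_descriptor(fine)}
probe = np.sin(2 * np.pi * 205 * t) + 0.1 * rng.standard_normal(t.size)
print(classify_nearest(vibrotactile_descriptor(probe), protos))
```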